    April 1974
    U.S. ATOMIC ENERGY COMMISSION
    REGULATORY GUIDE
    DIRECTORATE OF REGULATORY STANDARDS
    REGULATORY GUIDE 5.22
    ASSESSMENT OF THE ASSUMPTION OF
    NORMALITY (EMPLOYING INDIVIDUAL OBSERVED VALUES)

A. INTRODUCTION
    Part 70 of Title 10 of the Code of Federal Regulations requires
    certain AEC special nuclear material (SNM) licensees to establish and
    maintain written material control and accounting procedures to enable
    the licensee to account for the special nuclear material in his
    possession. Part 70 also requires applicants for certain AEC licenses
    for SNM to submit to the Commission as part of the application a full
    description of such procedures, including statistical controls. The
    effectiveness of such controls depends greatly upon the validity of the
    statistical procedures applied. A key assumption often encountered in
    applications is that the measured value is a random variable that can be
    described by a normal or Gaussian distribution function. This guide
    identifies methods acceptable to the Regulatory staff for assessing the
    validity of such an assumption.
B. DISCUSSION
    The general role of statistical methodology in SNM accountability
    is to serve as the basis for evaluations which objectively provide
    assurance that material in the possession of licensees is accounted for
    effectively and that losses are localized when they occur.
    In maintaining material balances, it is usually impracticable to
    assay (observe) every item in a material process batch or lot
    (population). It would also be impossible, of course, to measure any
    material an infinite number of times. Accountability systems must
    therefore rely on observing a portion (sample) of the population being
    measured, from which general conclusions can be inferred about the
    population. Such statistically calculated inferences based on sample
    measurement data are basically predictions of what would be found if the
    sampled populations could be and were fully observed. The validity of
    inferences made on the basis of an assumed frequency function is
    dependent on the extent to which the assumed distribution describes the
    actual measurement process.
    One of the most frequent assumptions made in developing and
    applying statistical procedures for treating measurement data is that
    observations comprising a sample were drawn from a population which can
    be described by a normal distribution. Examination of a typical set
    of measurements that adequately represents the distribution will
    show the individual measurement values clustering around the
    average in a symmetrical bell-shaped pattern. Two parameters, the
    mean and the variance, are sufficient to define the normal
    distribution describing a particular set of data under examination.
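    For reference (a standard formula, not part of this guide's text),
    the normal density determined by these two parameters is

        f(x) = \frac{1}{\sigma\sqrt{2\pi}}
               \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
        \qquad -\infty < x < \infty,

    where \mu is the mean and \sigma^2 is the variance.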
    Most statistical interval estimates and tests commonly used in
    practice are based on the assumption of a normal distribution. Two
    features tend to promote use of the normal distribution, sometimes
    without sufficient investigation of whether the assumption is
    justified. One is its wide applicability; the other is that the
    associated mathematical operations are comparatively simple and
    well known. Depending on the statistical technique employed, the
    basic assumption that a measured random variable is normally
    distributed can often tolerate slight to moderate departures from
    normality without significantly affecting conclusions. However,
    when decisions are based on the outcome of a statistical test,
    there is always a risk that a wrong conclusion has been reached:
    any decision based on incomplete information runs the risk of
    either rejecting a true hypothesis or accepting a false one. The
    discernibility (power) of a statistical test depends significantly
    upon the quantity and quality of the data tested.
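    As an illustrative sketch only (ANSI N15.15-1973 prescribes its own
    test procedures, which are not reproduced here), the following
    Python fragment applies one common test of normality, the
    Shapiro-Wilk test, to a set of observed values; the significance
    level alpha is the accepted risk of rejecting a true hypothesis of
    normality.

        # Illustrative stand-in, not the procedure of ANSI N15.15-1973:
        # apply the Shapiro-Wilk test of normality to observed values.
        from scipy import stats

        def assess_normality(values, alpha=0.05):
            """Test H0: the sample was drawn from a normal population.

            alpha is the risk of rejecting H0 when it is true (a Type I
            error); failing to reject does not prove normality.
            """
            statistic, p_value = stats.shapiro(values)
            return statistic, p_value, p_value < alpha

        # Hypothetical repeated assays of the same item.
        measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]
        w, p, reject = assess_normality(measurements)
        print(f"W = {w:.4f}, p = {p:.4f}, reject normality: {reject}")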
    A major departure from the normality assumption could result in
    misleading conclusions. For example, the level of control could be
    incorrectly determined and assessed if measurement errors are not
    characterized properly in the calculation of limits of error. This
    could seriously affect the facility's ability to control quantities of
    material unaccounted for and to evaluate their significance. Therefore,
    the testing of the assumption of normality merits consideration as a
    control procedure.
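    A small simulation can make this risk concrete (a sketch with
    assumed error distributions, not data from any facility): limits of
    error computed as if errors were normal can understate how often
    the upper limit is exceeded when the errors are in fact skewed.

        # Sketch with assumed parameters: frequency with which errors
        # exceed a nominal +2-sigma limit, normal vs. skewed errors.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        normal_errors = rng.normal(0.0, 1.0, n)
        # Lognormal errors rescaled to zero mean and unit variance, so
        # both sets share the same first two moments.
        skewed = rng.lognormal(0.0, 0.75, n)
        skewed = (skewed - skewed.mean()) / skewed.std()

        for name, e in (("normal", normal_errors), ("skewed", skewed)):
            tail = np.mean(e > 2.0)
            print(f"{name:7s} errors: P(error > +2 sigma) = {tail:.3f}")

    Under the skewed alternative the upper tail is noticeably larger
    than the nominal normal-theory value, which would distort any limit
    of error derived from the normality assumption.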
    Subcommittee N15-3 of the American National Standards Institute
    (ANSI) Standards Committee N15, Methods of Nuclear Materials Control,
    has developed a standard for assessing the assumption of normality
    (employing individual observed values). This standard has been
    designated ANSI N15.15-1973.(1)
    ----------
    (1) Copies may be obtained from the American National Standards
    Institute, Inc., 1430 Broadway, New York, New York 10018.
    ----------
C. REGULATORY POSITION
    The statistical hypothesis testing techniques contained in the
    approved standard, ANSI N15.15-1973, "Assessment of the Assumption of
    Normality (Employing Individual Observed Values),"(1) are generally
    acceptable to the Regulatory staff for use in written procedures for the
    control of special nuclear material, subject to the following: the
    interpretation of the significance of results from applying the
    test of normality should consider the amount of data treated
    (sample size), in particular whether enough data are available to
    reach a meaningful conclusion about the validity of the assumption
    for a given process.
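    To illustrate this sample-size caveat (a sketch with assumed
    distributions and sizes, not part of the standard), a simulation
    can estimate how often a normality test detects a genuinely
    non-normal process at several sample sizes; with very few
    observations the test rarely rejects, so a passing result carries
    little weight.

        # Sketch: estimated power of the (stand-in) Shapiro-Wilk test
        # against a skewed lognormal alternative at several sample sizes.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        alpha, trials = 0.05, 2000

        for n in (5, 10, 20, 50, 100):
            rejections = sum(
                stats.shapiro(rng.lognormal(0.0, 0.5, n)).pvalue < alpha
                for _ in range(trials)
            )
            print(f"n = {n:3d}: estimated power = {rejections / trials:.2f}")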